Current Issue: January-March | Volume: 2021 | Issue: 1 | Articles: 5
Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide automatic differentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate them with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model's emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset...
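The weight-transfer idea at the heart of such a Keras-to-Fortran bridge can be sketched in a few lines. The plain-text file format and helper functions below are illustrative assumptions, not FKB's actual interface: each dense layer's weight matrix and bias vector are written as shapes followed by flat values, a layout a Fortran reader could parse with list-directed I/O.

```python
# Sketch of weight transfer for a Keras-to-Fortran bridge. The file
# format and function names here are illustrative only, not FKB's.
import numpy as np

def export_dense_layers(layers, path):
    """Write (weights, bias) pairs: layer count, then per layer the
    weight shape, flattened weights, and the bias vector."""
    with open(path, "w") as f:
        f.write(f"{len(layers)}\n")
        for w, b in layers:
            f.write(f"{w.shape[0]} {w.shape[1]}\n")
            f.write(" ".join(f"{v:.8e}" for v in w.ravel()) + "\n")
            f.write(" ".join(f"{v:.8e}" for v in b) + "\n")

def import_dense_layers(path):
    """Inverse of export_dense_layers; what a Fortran reader would do."""
    layers = []
    with open(path) as f:
        n = int(f.readline())
        for _ in range(n):
            rows, cols = map(int, f.readline().split())
            w = np.array(f.readline().split(), dtype=float).reshape(rows, cols)
            b = np.array(f.readline().split(), dtype=float)
            layers.append((w, b))
    return layers

# Round-trip a toy two-layer network standing in for a trained emulator.
rng = np.random.default_rng(0)
net = [(rng.standard_normal((8, 32)), rng.standard_normal(32)),
       (rng.standard_normal((32, 1)), rng.standard_normal(1))]
export_dense_layers(net, "weights.txt")
restored = import_dense_layers("weights.txt")
```

In a real deployment the export side would read the arrays out of a trained Keras model rather than generate them randomly; the round trip above just demonstrates that the serialized form is lossless to the printed precision.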
Software reliability is an important quality attribute, and software reliability models are frequently used to measure and predict software maturity. The nature of mobile environments differs from that of PC and server environments due to many factors, such as the network, energy, battery, and compatibility. Evaluating and predicting mobile application reliability are real challenges because of the diversity of the mobile environments in which the applications are used, and the lack of publicly available defect data. In addition, bug reports are optionally submitted by end-users. In this paper, we propose assessing and predicting the reliability of a mobile application using known software reliability growth models (SRGMs)...
The main objective of this research is to discuss the current legal and methodological issues in the field of software reusability. Although numerous online forums discuss such issues in Q&A form, this paper is an attempt to raise awareness of the legal issues into which a software engineer may fall. The paper discusses the current issues with software reusability within the legal and methodological context, applying an extensive literature review to critically appraise past studies and reach a collective conclusion. Prior to discussing the issues, the benefits of reuse are outlined, including the saving of time and cost for users...
Energy consumption has become one of the main concerns in supporting the rapid growth of cloud data centers: it not only increases the cost of electricity for service providers but also plays an important role in increasing greenhouse gas emissions, and thus environmental pollution, and has a negative impact on system reliability and availability. As a result, energy consumption and efficiency metrics have become a vital issue for scheduling parallel, task-based applications at cloud data centers. In this paper, we present a time- and energy-aware two-phase scheduling algorithm called best heuristic scheduling (BHS) for directed acyclic graph (DAG) scheduling on cloud data center processors. In the first phase, the algorithm allocates resources to tasks by sorting, based on four heuristic methods and a grasshopper algorithm. It then selects the most appropriate method to perform each task, based on an importance factor determined by the end-user or service provider, to deliver a solution at the right time. In the second phase, BHS minimizes the makespan and energy consumption according to the importance factor determined by the end-user or service provider, taking into account the start time, setup time, end time, and energy profile of the virtual machines...
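The two quantities BHS trades off, makespan and energy, can be computed for any fixed task-to-VM assignment on a DAG; the following sketch shows that baseline computation, not the BHS algorithm itself. The DAG, runtimes, and power figures are illustrative assumptions.

```python
# Sketch: makespan and a simple busy-time energy estimate for a fixed
# task-to-VM assignment on a DAG. Not BHS itself; the DAG, runtimes,
# and power figures below are illustrative.
from collections import defaultdict

assign = {"A": (0, 4), "B": (1, 3), "C": (0, 2), "D": (1, 5)}  # task -> (vm, runtime)
deps = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}          # task -> prerequisites
power = {0: 90.0, 1: 60.0}                                      # watts per VM while busy

finish = {}
vm_free = defaultdict(float)       # earliest time each VM becomes idle
for task in ["A", "B", "C", "D"]:  # a topological order of `deps`
    vm, runtime = assign[task]
    # A task starts once all prerequisites finish AND its VM is free.
    ready = max((finish[p] for p in deps[task]), default=0.0)
    start = max(ready, vm_free[vm])
    finish[task] = start + runtime
    vm_free[vm] = finish[task]

makespan = max(finish.values())
energy = sum(power[vm] * rt for vm, rt in assign.values())  # joules, busy time only
print(f"makespan = {makespan}, busy-time energy = {energy} J")
```

A scheduler like BHS searches over assignments (and orderings) to shrink both numbers, weighting them by the user-supplied importance factor; this sketch only evaluates one candidate.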
There has been an explosion in the volume of data that is being accessed from the Internet. As a result, the risk of a Web server being inundated with requests is ever-present. One approach to reducing the performance degradation that potentially comes from Web server overloading is to employ Web caching, where data content is replicated in multiple locations. In this paper, we investigate the use of evolutionary algorithms to dynamically alter partition size in Web caches. We use established modeling techniques to compare the performance of our evolutionary algorithm to that found in statically partitioned systems. Our results indicate that utilizing an evolutionary algorithm to dynamically alter partition sizes can lead to performance improvements, especially in environments where the relative size of large to small pages is high...